The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of the challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
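To make two of the surveyed practices concrete (k-fold cross-validation on the training set and ensembling of the resulting models), here is a minimal sketch using scikit-learn with a generic classifier; the arrays X and y are hypothetical stand-ins for a challenge training set, and this is not taken from any particular submission.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data standing in for a challenge dataset:
# X holds per-sample feature vectors, y the corresponding labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

# 5-fold cross-validation on the training set: each fold yields one model.
fold_models, fold_scores = [], []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    fold_scores.append(model.score(X[val_idx], y[val_idx]))
    fold_models.append(model)
print(f"mean validation accuracy: {np.mean(fold_scores):.3f}")

# Ensembling of multiple identical models: average the fold models'
# predicted probabilities for the final (test-time) prediction.
X_test = rng.normal(size=(10, 16))
probs = np.mean([m.predict_proba(X_test) for m in fold_models], axis=0)
ensemble_prediction = probs.argmax(axis=1)
```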
Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match the performance of fully trained dense networks at initialization, without any optimization of the network's weights (i.e., untrained networks). However, the presence of such untrained subnetworks in graph neural networks (GNNs) still remains mysterious. In this paper, we carry out a first-of-its-kind exploration of discovering matching untrained GNNs. With sparsity as the core tool, we can find \textit{untrained sparse subnetworks} at initialization that can match the performance of \textit{fully trained dense} GNNs. Beyond this already encouraging finding of comparable performance, we show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem, hence becoming a powerful tool to enable deeper GNNs without bells and whistles. We also observe that such sparse untrained subnetworks have appealing performance in out-of-distribution detection and robustness to input perturbations. We evaluate our method across widely-used GNN architectures on various popular datasets, including the Open Graph Benchmark (OGB).
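As a hypothetical illustration of how an untrained sparse subnetwork can be searched for, the PyTorch sketch below freezes a randomly initialized weight matrix and trains only per-weight scores, keeping the top-scoring fraction as the mask; the layer, keep ratio, and straight-through trick are generic assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer whose weights stay at their random initialization.

    Only the per-weight scores are trained; the forward pass uses the
    top-k scored weights, so optimization searches for a subnetwork
    rather than updating the weights themselves.
    """
    def __init__(self, in_dim: int, out_dim: int, keep_ratio: float = 0.2):
        super().__init__()
        weight = torch.empty(out_dim, in_dim)
        nn.init.kaiming_uniform_(weight)
        self.weight = nn.Parameter(weight, requires_grad=False)  # frozen
        self.scores = nn.Parameter(torch.rand(out_dim, in_dim))  # trained
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = max(1, int(self.scores.numel() * self.keep_ratio))
        threshold = torch.topk(self.scores.flatten(), k).values.min()
        hard_mask = (self.scores >= threshold).float()
        # Straight-through estimator: the binary mask is used in the
        # forward pass while gradients flow into the scores.
        mask = hard_mask + self.scores - self.scores.detach()
        return x @ (self.weight * mask).t()

# In a GNN, layers like this would replace the usual dense feature
# transformations, e.g. inside a GCN layer.
layer = MaskedLinear(in_dim=32, out_dim=16)
out = layer(torch.randn(8, 32))
```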
Kidney structure segmentation from computed tomography angiography (CTA) is essential for many computer-aided kidney cancer treatment applications. The Kidney Parsing (KIPA 2022) challenge aims to build a fine-grained multi-structure dataset and to improve the segmentation of multiple renal structures. Recently, U-Net has dominated medical image segmentation. In the KIPA challenge, we evaluated several U-Net variants and selected the best-performing model for the final submission.
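As background on the architecture family, here is a minimal 2D PyTorch sketch of a one-level U-Net (the encoder-decoder-with-skip-connection pattern that the evaluated variants build on); the channel counts, 2D setting, and class count are illustrative assumptions and this is not the actual KIPA submission.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 conv + BatchNorm + ReLU blocks, the basic unit of U-Net."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class TinyUNet(nn.Module):
    """One-level U-Net: encoder, bottleneck, decoder with a skip connection."""
    def __init__(self, in_ch: int = 1, num_classes: int = 4):
        super().__init__()
        self.enc = DoubleConv(in_ch, 32)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = DoubleConv(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec = DoubleConv(64, 32)  # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        skip = self.enc(x)
        x = self.bottleneck(self.down(skip))
        x = self.up(x)
        x = self.dec(torch.cat([skip, x], dim=1))
        return self.head(x)  # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 64, 64))  # -> (1, 4, 64, 64)
```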
Extracting information from documents is challenging due to their complex layouts. Most previous studies develop multimodal pre-trained models in a self-supervised way. In this paper, we focus on embedding learning for word blocks that contain both text and layout information, and propose UTEL, a language model with unified text and layout pre-training. Specifically, we propose two pre-training tasks: Surrounding Word Prediction (SWP) for layout learning, and Contrastive learning of Word Embeddings (CWE) for identifying different word blocks. Moreover, we replace the commonly used 1D position embedding with a 1D clipped relative position embedding. In this way, the joint training of Masked Layout-Language Modeling (MLLM) and the two newly proposed tasks enables interaction between semantic and spatial features in a unified way. Additionally, the proposed UTEL can process sequences of arbitrary length by removing the 1D position embedding, while maintaining competitive performance. Extensive experimental results show that UTEL learns better joint representations than previous methods on various downstream tasks, despite requiring no image modality. Code is available at \url{https://github.com/taosong2019/utel}.
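To illustrate what a 1D clipped relative position embedding can look like, here is a small PyTorch sketch; the clipping distance and the bias-style use in attention are assumptions for illustration and may differ from UTEL's exact formulation.

```python
import torch
import torch.nn as nn

class ClippedRelativePositionBias(nn.Module):
    """Relative 1D position embedding with offsets clipped to +/- max_dist.

    Because only relative offsets are embedded, the module works for
    sequences of arbitrary length, unlike an absolute 1D position table.
    """
    def __init__(self, num_heads: int, max_dist: int = 8):
        super().__init__()
        self.max_dist = max_dist
        # one bias per clipped offset in [-max_dist, max_dist], per head
        self.bias = nn.Embedding(2 * max_dist + 1, num_heads)

    def forward(self, seq_len: int) -> torch.Tensor:
        pos = torch.arange(seq_len)
        rel = pos[None, :] - pos[:, None]                   # (L, L) offsets
        rel = rel.clamp(-self.max_dist, self.max_dist) + self.max_dist
        return self.bias(rel).permute(2, 0, 1)              # (heads, L, L)

# Added to attention logits of shape (batch, heads, L, L):
bias = ClippedRelativePositionBias(num_heads=12)(seq_len=20)
```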
Recent work in machine learning and cognitive science suggests that understanding causal information is essential to the development of intelligence. The extensive cognitive science literature using the ``blicket detector'' environment shows that children are adept at many kinds of causal inference and learning. We propose to adapt this environment for machine learning agents. One of the key challenges for current machine learning algorithms is modeling and understanding causal overhypotheses: transferable abstract hypotheses about sets of causal relationships. In contrast, even young children spontaneously learn and use causal overhypotheses. In this work, we present a new benchmark -- a flexible environment that allows for the evaluation of existing techniques under variable causal overhypotheses -- and demonstrate that many existing state-of-the-art methods have trouble generalizing in this environment. The code and resources for this benchmark are available at https://github.com/cannylab/casual_overhypothess.
We propose an intelligent solution on smartwatches to assess handwashing, aiming to raise users' awareness of high-quality handwashing and help them cultivate the habit. UWash can identify the onset/offset of handwashing, measure the duration of each gesture, and score each gesture as well as the entire procedure in accordance with the WHO guidelines. Technically, we frame the task of handwashing assessment as a semantic segmentation problem in computer vision, and propose a lightweight UNet-like network, with only 496 KFLOPs, to achieve it effectively. Experiments on 51 subjects show that UWash achieves an accuracy of 92.27\% in sample-wise handwashing gesture recognition, an error of $<$0.5 \textit{seconds} in onset/offset detection, and an error of $<$5 out of 100 \textit{points} in scoring in the user-dependent setting, while remaining promising in the cross-user evaluation and the cross-user-cross-location evaluation.
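The sketch below illustrates what "handwashing assessment as semantic segmentation" means in practice: a model maps a window of smartwatch sensor samples to one class logit per time step. The network, sensor channel count, and number of gesture classes are hypothetical; the actual UWash architecture is not reproduced here.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 10   # assumed: gesture classes plus a non-washing class
IMU_CHANNELS = 6   # assumed: 3-axis accelerometer + 3-axis gyroscope

# A deliberately tiny 1D conv net: per-time-step logits turn gesture
# recognition into semantic segmentation of the sensor stream.
model = nn.Sequential(
    nn.Conv1d(IMU_CHANNELS, 16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(16, NUM_CLASSES, kernel_size=1),
)

window = torch.randn(1, IMU_CHANNELS, 200)        # (batch, channels, time steps)
logits = model(window)                            # (1, NUM_CLASSES, 200)
labels = torch.randint(0, NUM_CLASSES, (1, 200))  # per-time-step gesture labels
loss = nn.CrossEntropyLoss()(logits, labels)      # trained like image segmentation
```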
Retrieval-based dialogue response selection aims to find the proper response from a candidate set given a multi-turn context. Methods based on pre-trained language models (PLMs) have yielded significant improvements on this task. The sequence representation plays a key role in matching the dialogue context with the response. However, we observe that different context-response pairs sharing the same context always have greater similarity in the sequence representations computed by PLMs, which makes it hard to distinguish positive responses from negative ones. Motivated by this, we propose a novel \textbf{F}ine-\textbf{G}rained \textbf{C}ontrastive (FGC) learning method for the response selection task based on PLMs. This FGC learning strategy helps PLMs generate more distinguishable matching representations of each dialogue at a fine-grained level, and further improves the prediction of selecting the positive response. Empirical studies on two benchmark datasets demonstrate that the proposed FGC learning method can generally improve the performance of existing PLM-based matching models.
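The sketch below shows a generic contrastive objective over candidate responses that share one dialogue context: the positive response is pulled toward the context representation while the negatives sharing that context are pushed away. It only illustrates the kind of fine-grained contrast involved; the temperature, pooling, and encoder are assumptions, not the exact FGC objective.

```python
import torch
import torch.nn.functional as F

def contrastive_response_loss(context_vec, response_vecs, positive_idx, temperature=0.05):
    """InfoNCE-style loss over responses that share one dialogue context.

    context_vec:   (dim,)   pooled PLM representation of the context
    response_vecs: (n, dim) pooled representations of n candidate responses
    positive_idx:  index of the ground-truth response among the candidates
    """
    context_vec = F.normalize(context_vec, dim=-1)
    response_vecs = F.normalize(response_vecs, dim=-1)
    logits = response_vecs @ context_vec / temperature      # (n,) similarities
    target = torch.tensor(positive_idx)
    return F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))

# Hypothetical pooled [CLS] vectors from a PLM for one context and 4 candidates.
ctx = torch.randn(768)
cands = torch.randn(4, 768)
loss = contrastive_response_loss(ctx, cands, positive_idx=0)
```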
In this paper, the problem of optimal lossless compression of the gradients in deep neural network (DNN) training is considered. Gradient compression is relevant in many distributed DNN training scenarios, including the recently popular federated learning (FL) scenario, in which each remote user is connected to the parameter server (PS) through a noiseless but rate-limited channel. In distributed DNN training, if the underlying gradient distribution is available, classical lossless compression methods can be used to reduce the number of bits required to communicate the gradient entries. Mean-field analysis has suggested that gradient updates can be treated as independent random variables, while a Laplace approximation can be used to argue that the gradients have a distribution that is approximately normal (Norm) in certain regimes. In this paper, we argue that, for some networks of practical interest, the gradient entries can be well modeled as having a generalized normal (GenNorm) distribution. We provide numerical evaluations to validate that the GenNorm modeling assumption yields a more accurate prediction of the tail distribution of DNN gradients. In addition, this modeling choice provides concrete improvements in the lossless compression of the gradients when classical fixed-to-variable lossless coding algorithms, such as Huffman coding, are applied to quantized gradient updates. The latter results indeed provide an effective compression strategy with low memory and computational complexity that has great practical relevance in distributed DNN training scenarios.
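As a small numerical illustration of the modeling claim (not a reproduction of the paper's experiments), the snippet below fits both a normal and a generalized normal distribution to a vector of gradient entries with scipy and compares their log-likelihoods; the synthetic "gradients" here are a stand-in for real DNN gradients.

```python
import numpy as np
from scipy import stats

# Stand-in for a flattened vector of gradient entries from one training step;
# real gradients would be collected from a DNN instead.
rng = np.random.default_rng(0)
grads = stats.gennorm.rvs(beta=0.8, scale=1e-3, size=20_000, random_state=rng)

# Fit a Gaussian and a generalized normal (GenNorm) model to the same data.
mu, sigma = stats.norm.fit(grads)
beta, loc, scale = stats.gennorm.fit(grads)

# A higher total log-likelihood indicates the better fit, particularly in the tails.
ll_norm = stats.norm.logpdf(grads, mu, sigma).sum()
ll_gennorm = stats.gennorm.logpdf(grads, beta, loc, scale).sum()
print(f"GenNorm shape beta = {beta:.2f}")
print(f"log-likelihood: normal = {ll_norm:.0f}, gennorm = {ll_gennorm:.0f}")
```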
With the rapid development of deep learning techniques, various recent works have attempted to apply graph neural networks (GNNs) to solve NP-hard problems such as Boolean satisfiability (SAT), which shows the potential of bridging the gap between machine learning and symbolic reasoning. However, the quality of solutions predicted by GNNs has not been well studied in the literature. In this paper, we investigate the capability of GNNs to learn to solve the maximum satisfiability (MaxSAT) problem from both theoretical and practical perspectives. We build two kinds of GNN models to learn solutions of MaxSAT instances from benchmarks, and the experimental evaluation shows that GNNs achieve attractive performance in solving the MaxSAT problem. Based on the theory of algorithmic alignment, we also present a theoretical explanation of why GNNs can learn to solve the MaxSAT problem to some extent.
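To give a concrete sense of how a MaxSAT instance can be fed to a GNN, the sketch below builds a bipartite literal-clause graph from a small CNF formula (clauses given as lists of signed integers, DIMACS-style). This graph construction is a common generic choice and only an assumption about how such models can be set up, not the paper's exact encoding.

```python
from typing import List, Tuple

def cnf_to_bipartite(num_vars: int, clauses: List[List[int]]) -> Tuple[int, List[Tuple[int, int]]]:
    """Build a bipartite literal-clause graph from a CNF formula.

    Nodes 0..2*num_vars-1 are literals (2*(v-1) for v, 2*(v-1)+1 for -v);
    nodes 2*num_vars.. are clauses. An edge connects a literal to every
    clause it appears in; a GNN would message-pass over these edges.
    """
    edges = []
    for c_idx, clause in enumerate(clauses):
        clause_node = 2 * num_vars + c_idx
        for lit in clause:
            var = abs(lit) - 1
            lit_node = 2 * var + (0 if lit > 0 else 1)
            edges.append((lit_node, clause_node))
            edges.append((clause_node, lit_node))  # undirected message passing
    num_nodes = 2 * num_vars + len(clauses)
    return num_nodes, edges

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
num_nodes, edge_list = cnf_to_bipartite(3, [[1, -2], [2, 3], [-1, -3]])
print(num_nodes, len(edge_list))  # 9 nodes, 12 directed edges
```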
This paper presents normalizing flows for incremental smoothing and mapping (NF-iSAM), a novel algorithm for inferring the full posterior distribution in SLAM problems with nonlinear measurement models and non-Gaussian factors. NF-iSAM exploits the expressive power of neural networks and trains normalizing flows to model and sample the full posterior. By leveraging the Bayes tree, NF-iSAM enables efficient incremental updates similar to iSAM2, albeit in the more challenging non-Gaussian setting. We demonstrate the advantages of NF-iSAM over state-of-the-art point and distribution estimation algorithms using range-only SLAM problems with data association ambiguity. NF-iSAM presents superior accuracy in describing the posterior beliefs of continuous variables (e.g., position) and discrete variables (e.g., data association).
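The snippet below is a generic, hypothetical sketch of the core building block: two affine coupling layers that map Gaussian noise to samples from a potentially non-Gaussian 2D posterior (e.g., over a robot position). Only the sampling direction is shown; NF-iSAM's training against factor-graph densities, the change-of-variables log-determinant, and the Bayes tree machinery are not reproduced here.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: half the dimensions are transformed with
    a scale/shift predicted from the other half, keeping the map invertible."""
    def __init__(self, dim: int = 2, hidden: int = 32, flip: bool = False):
        super().__init__()
        self.flip = flip
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z1, z2 = z.chunk(2, dim=-1)
        if self.flip:                       # alternate which half is transformed
            z1, z2 = z2, z1
        log_scale, shift = self.net(z1).chunk(2, dim=-1)
        x2 = z2 * torch.exp(torch.tanh(log_scale)) + shift
        out = (x2, z1) if self.flip else (z1, x2)
        return torch.cat(out, dim=-1)

# A flow maps easy-to-sample Gaussian noise to samples from a learned,
# potentially multi-modal 2D posterior.
flow = nn.Sequential(AffineCoupling(flip=False), AffineCoupling(flip=True))
posterior_samples = flow(torch.randn(1000, 2))  # (1000, 2) samples
```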